22 research outputs found

    OpenFPM: A scalable environment for particle and particle-mesh codes on parallel computers

    Scalable and efficient numerical simulations continue to gain importance, as computation is now a firmly established tool of discovery alongside theory and experiment. Meanwhile, the performance of computing hardware grows through increasing hardware heterogeneity, enabling simulations of ever more complex models. However, efficiently implementing scalable codes on heterogeneous, distributed hardware systems becomes the bottleneck. This bottleneck can be alleviated by intermediate software layers that provide higher-level abstractions closer to the problem domain, allowing the computational scientist to focus on the simulation. Here, we present OpenFPM, an open and scalable framework that provides an abstraction layer for numerical simulations using particles and/or meshes. OpenFPM provides transparent and scalable infrastructure for shared-memory and distributed-memory implementations of particles-only and hybrid particle-mesh simulations of both discrete and continuous models, as well as non-simulation codes. This infrastructure is complemented with frequently used numerical routines, as well as interfaces to third-party libraries. This thesis presents the architecture and design of OpenFPM, details the underlying abstractions, and benchmarks the framework in applications ranging from Smoothed-Particle Hydrodynamics (SPH) to Molecular Dynamics (MD), Discrete Element Methods (DEM), Vortex Methods, stencil codes, high-dimensional Monte Carlo sampling (CMA-ES), and Reaction-Diffusion solvers, comparing it to the current state of the art and existing software frameworks.
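    As a concrete flavor of the abstraction layer described above, the sketch below creates a small distributed particle set with OpenFPM's vector_dist container, redistributes the particles to their owning processes, and fills the ghost layers. It is a minimal sketch based on OpenFPM's publicly documented examples; header paths, property layout, and exact signatures may differ between OpenFPM versions.

```cpp
// Minimal OpenFPM particle setup (sketch based on OpenFPM's documented examples;
// exact headers and signatures may differ between versions).
#include "Vector/vector_dist.hpp"
#include <cstdlib>

int main(int argc, char* argv[])
{
    openfpm_init(&argc, &argv);                      // initialize OpenFPM (and MPI)

    Box<3, double> domain({0.0, 0.0, 0.0}, {1.0, 1.0, 1.0});
    size_t bc[3] = {PERIODIC, PERIODIC, PERIODIC};   // periodic boundary conditions
    Ghost<3, double> ghost(0.05);                    // ghost-layer width for neighbor access

    // 3D particles carrying one vector-valued property (e.g. velocity).
    vector_dist<3, double, aggregate<double[3]>> vd(4096, domain, bc, ghost);

    auto it = vd.getDomainIterator();
    while (it.isNext())
    {
        auto p = it.get();
        for (int d = 0; d < 3; ++d)
        {
            vd.getPos(p)[d] = (double)rand() / RAND_MAX;  // random initial position
            vd.getProp<0>(p)[d] = 0.0;                    // zero initial velocity
        }
        ++it;
    }

    vd.map();           // redistribute particles to the owning processes
    vd.ghost_get<0>();  // communicate ghost copies of property 0

    openfpm_finalize();
    return 0;
}
```

    Mesh containers (grid_dist_id) and neighbor lists follow the same pattern of local iteration plus explicit map()/ghost_get() communication, which is what keeps user code close to the sequential formulation of the method.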

    QCD simulations with staggered fermions on GPUs

    We report on our implementation of the RHMC algorithm for the simulation of lattice QCD with two staggered flavors on Graphics Processing Units, using the NVIDIA CUDA programming language. The main feature of our code is that the GPU is not used merely as an accelerator; instead, the whole Molecular Dynamics trajectory is performed on it. After pointing out the main bottlenecks and how to circumvent them, we discuss the obtained performance. We present some preliminary results regarding OpenCL and multi-GPU extensions of our code and discuss future perspectives. (Comment: 22 pages, 14 EPS figures; final version to be published in Computer Physics Communications.)
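    The Molecular Dynamics trajectory at the core of (R)HMC is typically integrated with a leapfrog scheme. The toy sketch below shows that integrator on a plain array with a harmonic force; it only illustrates the structure of the trajectory that the paper keeps entirely resident on the GPU, where the force evaluation is the gauge and staggered-fermion force computed by CUDA kernels.

```cpp
// Leapfrog Molecular Dynamics trajectory, as used inside (R)HMC.
// Toy illustration: a harmonic force stands in for the gauge and
// staggered-fermion forces that the paper evaluates on the GPU.
#include <cstdio>
#include <vector>

static void force(const std::vector<double>& q, std::vector<double>& f)
{
    for (std::size_t i = 0; i < q.size(); ++i) f[i] = -q[i];  // F = -dS/dq for S = q^2/2
}

int main()
{
    const std::size_t n = 1000;
    const double dt = 0.05;
    const int n_steps = 20;                                   // one trajectory = n_steps * dt

    std::vector<double> q(n, 1.0), p(n, 0.0), f(n);

    force(q, f);
    for (int s = 0; s < n_steps; ++s)
    {
        for (std::size_t i = 0; i < n; ++i) p[i] += 0.5 * dt * f[i];  // half-step momenta
        for (std::size_t i = 0; i < n; ++i) q[i] += dt * p[i];        // full-step fields
        force(q, f);
        for (std::size_t i = 0; i < n; ++i) p[i] += 0.5 * dt * f[i];  // half-step momenta
    }
    std::printf("q[0] = %f, p[0] = %f\n", q[0], p[0]);
    return 0;
}
```

    Keeping all of these update loops and the force evaluation on the device is what removes host-device transfers from the inner trajectory, which is the design point the abstract emphasizes.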

    A Distributed Algebra System for Time Integration on Parallel Computers

    We present a distributed algebra system for efficient and compact implementation of numerical time integration schemes on parallel computers and graphics processing units (GPUs). The software implementation combines the time integration library Odeint from Boost with the OpenFPM framework for scalable scientific computing. Implementing multi-stage, multi-step, or adaptive time integration methods in distributed-memory parallel codes or on GPUs is challenging. The present algebra system addresses this by making the time integration methods from Odeint available in a concise template-expression language for numerical simulations distributed and parallelized using OpenFPM. This allows using state-of-the-art time integration schemes, or switching between schemes, by changing one line of code, while maintaining parallel scalability. This enables scalable time integration with compact code and facilitates rapid rewriting and deployment of simulation algorithms. We benchmark the present software for exponential and sigmoidal dynamics and present an application example to the 3D Gray-Scott reaction-diffusion problem on both CPUs and GPUs, in only 60 lines of code.
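    For orientation, the sketch below shows the plain Boost.Odeint usage pattern on a std::vector state; the stepper type is the single line one changes to switch integration schemes. In the system described above, the state type and the algebra behind it are replaced by OpenFPM's distributed containers and template-expression algebra, so the same pattern runs in parallel; the std::vector version here is only a serial stand-in.

```cpp
// Plain Boost.Odeint sketch: switching time integrators by changing one line.
// Serial stand-in; the paper's system swaps the state type and algebra for
// OpenFPM's distributed data structures.
#include <boost/numeric/odeint.hpp>
#include <cstdio>
#include <vector>

using state_type = std::vector<double>;

// Sigmoidal (logistic) dynamics: dx/dt = x * (1 - x).
void sigmoidal(const state_type& x, state_type& dxdt, double /*t*/)
{
    for (std::size_t i = 0; i < x.size(); ++i) dxdt[i] = x[i] * (1.0 - x[i]);
}

int main()
{
    namespace odeint = boost::numeric::odeint;

    state_type x(1000, 0.01);  // initial condition

    // Switching the scheme means changing this one type, e.g. to
    // odeint::runge_kutta_cash_karp54<state_type> or
    // odeint::adams_bashforth_moulton<4, state_type>.
    odeint::runge_kutta4<state_type> stepper;

    const double dt = 0.01;
    for (double t = 0.0; t < 10.0; t += dt)
        stepper.do_step(sigmoidal, x, t, dt);

    std::printf("x[0] = %f\n", x[0]);
    return 0;
}
```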

    A language and development environment for parallel particle methods

    We present the Parallel Particle-Mesh Environment (PPME), a domain-specific language (DSL) and development environment for numerical simulations using particles and hybrid particle-mesh methods. PPME is the successor of the Parallel Particle-Mesh Language (PPML), a Fortran-based DSL that provides high-level abstractions for the development of distributed-memory particle-mesh simulations. On top of PPML, PPME provides a complete development environment for particle-based simulations using state-of-the-art language engineering and compiler construction techniques. Relying on a novel domain metamodel and a formal type system for particle methods, it enables advanced static code correctness checks at the level of particle abstractions, complementing the low-level analysis of the compiler. Furthermore, PPME adopts Herbie for improving the accuracy of floating-point expressions and supports a convenient high-level mathematical notation for equations and differential operators. For demonstration purposes, we discuss an example from Discrete Element Methods (DEM) using the classic Silbert model to simulate granular flows.

    The Software Architecture and development approach for the ASTRI Mini-Array gamma-ray air-Cherenkov experiment at the Observatorio del Teide

    The ASTRI Mini-Array is an international collaboration led by the Italian National Institute for Astrophysics (INAF) and devoted to imaging atmospheric Cherenkov light for very-high-energy gamma-ray astronomy. The project is deploying an array of 9 telescopes sensitive above 1 TeV. In this contribution, we present the architecture of the software that covers the entire life cycle of the observatory, from scheduling to remote operations and data dissemination. The high-speed network connection available between the observatory site in the Canary Islands and the Data Center in Rome allows for ready data availability for stereo triggering and data processing.

    A C++ expression system for partial differential equations enables generic simulations of biological hydrodynamics

    We present a user-friendly and intuitive C++ expression system to implement numerical simulations of continuum biological hydrodynamics. The expression system allows writing simulation programs in near-mathematical notation and makes codes more readable, more compact, and less error-prone. It also cleanly separates the implementation of the partial differential equation model from the implementation of the numerical methods used to discretize it. This allows changing either of them with minimal changes to the source code. The presented expression system is implemented in the high-performance computing platform OpenFPM, supporting simulations that transparently parallelize on multi-processor computer systems. We demonstrate that our expression system makes it easier to write scalable codes for simulating biological hydrodynamics in space and time. We showcase the present framework in numerical simulations of active polar fluids, as well as in classic simulations of fluid dynamics from the incompressible Navier–Stokes equations to Stokes flow in a ball. The presented expression system accelerates scalable simulations of spatio-temporal models that encode the physics and material properties of tissues in order to algorithmically study morphogenesis.
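    To make the idea of such an expression system concrete, the self-contained toy below builds lazy expressions over a 1D field so that an explicit diffusion update can be written in near-mathematical notation, u_new = u + dt*D*Lap(u), and evaluated in a single pass over the nodes. This only illustrates the expression-template technique; it is not the OpenFPM API presented in the paper, whose expressions additionally encapsulate distributed data structures and the discretized differential operators.

```cpp
// Toy expression-template system: lazy field expressions evaluated node by node.
// Illustration of the technique only; not the OpenFPM expression API.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Field {
    std::vector<double> v;
    explicit Field(std::size_t n, double init = 0.0) : v(n, init) {}
    double operator[](std::size_t i) const { return v[i]; }
    std::size_t size() const { return v.size(); }
    // Assigning any expression evaluates it lazily, one node at a time.
    template <class Expr> Field& operator=(const Expr& e) {
        for (std::size_t i = 0; i < v.size(); ++i) v[i] = e[i];
        return *this;
    }
};

// Lazy sum of two expressions.
template <class L, class R> struct Add {
    const L& l; const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};
template <class L, class R> Add<L, R> operator+(const L& l, const R& r) { return {l, r}; }

// Lazy scaling of an expression by a constant.
template <class E> struct Scale {
    double a; const E& e;
    double operator[](std::size_t i) const { return a * e[i]; }
};
template <class E> Scale<E> operator*(double a, const E& e) { return {a, e}; }

// Lazy 1D Laplacian stencil (zero at the boundary nodes).
struct Lap {
    const Field& u; double h;
    double operator[](std::size_t i) const {
        if (i == 0 || i + 1 == u.size()) return 0.0;
        return (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h);
    }
};

int main() {
    const double dt = 1e-4, h = 0.01, D = 1.0;
    Field u(101), unew(101);
    u.v[50] = 1.0;                      // initial concentration spike
    unew = u + dt * D * Lap{u, h};      // near-mathematical notation, evaluated lazily
    std::printf("unew[50] = %g\n", unew[50]);
    return 0;
}
```

    Because the expression is evaluated only when assigned, no temporary fields are allocated, and swapping the PDE (the expression) or the discretization (the operator structs) leaves the rest of the code untouched, which is the separation of concerns the abstract describes.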

    Mesh-free collocation for surface differential operators

    We present a mesh-free collocation scheme to discretize intrinsic surface differential operators over surface point clouds with given normal vectors. The method is based on Discretization-Corrected Particle Strength Exchange (DC-PSE), which generalizes finite-difference methods to mesh-free point clouds and moving Lagrangian particles. The resulting Surface DC-PSE method is derived from an embedding theorem, but we analytically reduce the operator kernels along the surface normals, resulting in an embedding-free, purely surface-intrinsic computational scheme. We benchmark the scheme by discretizing the Laplace-Beltrami operator on a circle and a sphere, and present convergence results for both explicit and implicit solvers. We then showcase the algorithm on the problem of computing the mean curvature of an ellipsoid and of the Stanford Bunny by evaluating the surface divergence of the normal vector field with the proposed Surface DC-PSE method. (Comment: 15 pages, 4 figures, 28 references.)
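    For readers unfamiliar with DC-PSE, the generic flat-space operator it builds on approximates a derivative at a particle as a kernel-weighted sum over its neighbors, with kernel moments corrected per particle; the surface variant in the paper constrains this construction to the point cloud's tangent space. The formulas below state the standard DC-PSE form and the mean-curvature identity used in the showcase, up to sign and normalization conventions, and are included only for orientation.

```latex
% Generic (flat-space) DC-PSE approximation of the derivative D^\beta at particle x_p.
% The sign is set by the parity of |\beta|; \eta is a local kernel whose moments are
% corrected for each particle; \epsilon(x_p) is the local resolution.
D^{\beta} f(x_p) \;\approx\; \frac{1}{\epsilon(x_p)^{|\beta|}}
  \sum_{x_q \in \mathcal{N}(x_p)} \bigl( f(x_q) \pm f(x_p) \bigr)\,
  \eta\!\left( \frac{x_p - x_q}{\epsilon(x_p)} \right)

% Mean curvature from the surface divergence of the unit normal field
% (sign depends on the chosen orientation convention):
H \;=\; \pm \tfrac{1}{2}\, \nabla_{\mathcal{S}} \cdot \mathbf{n}
```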